synthetic media
As Good as a Coin Toss: Human Detection of AI-Generated Content
With only a 50-50 chance of detecting synthetic media online, users are more vulnerable than ever to being duped. Advances in generative AI have made it easier for anyone to manufacture increasingly realistic synthetic media (colloquially known as deepfakes) at faster speeds, at larger scales, and with greater customization than before. This in turn has led to synthetic media increasingly being used for harmful purposes, including disinformation campaigns, nonconsensual pornography, financial fraud, child sexual abuse and exploitation, and espionage. Today, the principal defense against deceptive synthetic media rests largely on the human observer's perceptual detection capabilities--the ability to visually or auditorily identify AI-generated content upon encountering it. Yet the growing realism of synthetic media erodes this ability, heightening people's vulnerability to weaponized synthetic content. Moreover, people overestimate how capable they are of identifying synthetic media, further exacerbating the problem. As synthetic media grows more sophisticated, so too does the threat of its weaponization, from financial fraud to the production of nonconsensual intimate material depicting adults and children.
- North America > United States > District of Columbia > Washington (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Government (1.00)
- (2 more...)
Socioeconomic Threats of Deepfakes and the Role of Cyber-Wellness Education in Defense
Because science is vast and its learning curve steep, we must rely on the expertise of others to develop our knowledge and skills.26 Toward this end, social media platforms have revolutionized how netizens--users who are actively engaged in online communities--gain knowledge and skills by facilitating the free exchange of information with the public (for example, followers or influencers). Businesses around the world also use these platforms, along with tools based on generative artificial intelligence (GenAI), to craft synthetic media, hoping to grow revenue by attracting more customers and improving their online experience.28 GenAI tools can empower cyber threats and exert cyberpsychological effects on netizens, allowing malicious actors to craft deepfakes in the form of disinformation, misinformation, and malinformation. Service providers must not only enhance GenAI tools to reduce hallucinations; they also have a statutory duty to mitigate data-driven biases.
- Information Technology > Security & Privacy (1.00)
- Media > News (0.81)
- Education > Curriculum > Health & Wellness Education (0.42)
Google's AI video tool amplifies fears of an increase in misinformation
In both Tehran and Tel Aviv, residents have faced heightened anxiety in recent days as the threat of missile strikes looms over their communities. Alongside the very real concerns for physical safety, there is growing alarm over the role of misinformation, particularly content generated by artificial intelligence, in shaping public perception. GeoConfirmed, an online verification platform, has reported an increase in AI-generated misinformation, including fabricated videos of air strikes that never occurred, both in Iran and Israel. This follows a similar wave of manipulated footage that circulated during recent protests in Los Angeles, which were sparked by a rise in immigration raids in the second-most populous city in the United States. The developments are part of a broader trend of politically charged events being exploited to spread false or misleading narratives.
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.26)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.26)
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- (3 more...)
- Media > News (1.00)
- Government (1.00)
TRIED: Truly Innovative and Effective AI Detection Benchmark, developed by WITNESS
Anlen, Shirin, Wojciak, Zuzanna
The proliferation of generative AI and deceptive synthetic media threatens the global information ecosystem, especially across the Global Majority. This report from WITNESS highlights the limitations of current AI detection tools, which often underperform in real-world scenarios due to challenges related to explainability, fairness, accessibility, and contextual relevance. In response, WITNESS introduces the Truly Innovative and Effective AI Detection (TRIED) Benchmark, a new framework for evaluating detection tools based on their real-world impact and capacity for innovation. Drawing on frontline experiences, deceptive AI cases, and global consultations, the report outlines how detection tools must evolve to become truly innovative and relevant by meeting diverse linguistic, cultural, and technological contexts. It offers practical guidance for developers, policy actors, and standards bodies to design accountable, transparent, and user-centered detection solutions, and to incorporate sociotechnical considerations into future AI standards, procedures, and evaluation frameworks. By adopting the TRIED Benchmark, stakeholders can drive innovation, safeguard public trust, strengthen AI literacy, and contribute to a more resilient global information ecosystem.
- North America > United States (1.00)
- Africa > Ghana (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- (19 more...)
- Instructional Material (0.66)
- Research Report (0.50)
- Media (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
- Information Technology > Communications > Social Media (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.34)
From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases
Leibowicz, Claire R., Cardona, Christian H.
2023 was the year the world woke up to generative AI, and 2024 is the year policymakers are responding more firmly. Importantly, this policy momentum is unfolding alongside the real-world creation and distribution of synthetic media. Social media platforms, news organizations, dating apps, image generation companies, and more are already navigating a world of AI-generated visuals and sounds that is changing hearts and minds, even as policymakers try to catch up. How, then, can AI governance capture the complexity of the synthetic media landscape? How can it attend to synthetic media's myriad uses, ranging from storytelling to privacy preservation to deception, fraud, and defamation, while taking into account the many stakeholders involved in its development, creation, and distribution? And what might it mean to govern synthetic media in a manner that upholds the truth while bolstering freedom of expression? What follows is the first known collection of diverse examples of the implementation of synthetic media governance that responds to these questions, specifically through Partnership on AI's (PAI) Responsible Practices for Synthetic Media - a voluntary, normative Framework for creating, distributing, and building technology for synthetic media responsibly, launched in February 2023. In this paper, we present a case bank of real-world examples that help operationalize the Framework - highlighting areas where synthetic media governance can be applied, augmented, expanded, and refined in practice. Read together, the cases emphasize distinct elements of AI policymaking and seven emergent best practices supporting transparency, safety, expression, and digital dignity online: consent, disclosure, and differentiation between harmful and creative use cases.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Russia > North Caucasian Federal District > Chechen Republic (0.04)
- North America > United States > New Hampshire (0.04)
- (4 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- (3 more...)
Deepfakes and Higher Education: A Research Agenda and Scoping Review of Synthetic Media
The pace of development of Artificial Intelligence (AI) technologies has led to significant concern in many areas of society, including educational contexts. As a result, research agendas on Generative AI (GenAI) in tertiary education have been established (Lodge et al., 2023); however, to date, no review or research agenda has specifically focused on deepfakes in tertiary education. Deepfakes are realistic GenAI-produced audio, visual, or multimedia outputs that depict false or inaccurate information (Akhtar, 2023). The major consequence of deepfakes is that they can portray an individual doing or saying something that they never did, marking an unprecedented shift in the ability to distort reality (Appel & Prietzel, 2022). As tertiary education institutions are centres of learning, the potential implications of such false information are highly important for students, teachers, and university leadership, thus warranting stakeholder attention.
- Europe > United Kingdom (0.14)
- Asia > Vietnam (0.04)
- Asia > India (0.04)
- (8 more...)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Security & Privacy (1.00)
- Education > Educational Setting > Higher Education (1.00)
As Good As A Coin Toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli
Cooke, Di, Edwards, Abigail, Barkoff, Sophia, Kelly, Kathryn
As synthetic media becomes progressively more realistic and barriers to using it continue to lower, the technology has been increasingly utilized for malicious purposes, from financial fraud to nonconsensual pornography. Today, the principal defense against being misled by synthetic media relies on the ability of the human observer to visually and auditorily discern between real and fake. However, it remains unclear just how vulnerable people actually are to deceptive synthetic media in the course of their day-to-day lives. We conducted a perceptual study with 1,276 participants to assess how accurately people could distinguish synthetic images, audio-only, video-only, and audiovisual stimuli from authentic ones. To reflect the circumstances under which people would likely encounter synthetic media in the wild, testing conditions and stimuli emulated a typical online platform, and all synthetic media used in the survey were sourced from publicly accessible generative AI technology. We find that, overall, participants struggled to meaningfully discern between synthetic and authentic content. We also find that detection performance worsens when stimuli contain synthetic rather than authentic content, when images feature human faces rather than non-face objects, when stimuli are single-modality rather than multimodal, when audiovisual stimuli are of mixed authenticity rather than fully synthetic, and when stimuli feature foreign languages rather than languages the observer is fluent in. Finally, we find that prior knowledge of synthetic media does not meaningfully improve participants' detection performance. Collectively, these results indicate that people are highly susceptible to being tricked by synthetic media in their daily lives and that human perceptual detection capabilities can no longer be relied upon as an effective counterdefense.
- North America > United States > District of Columbia > Washington (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Government (1.00)
- Media (0.93)
- Information Technology > Security & Privacy (0.70)
- Health & Medicine > Therapeutic Area (0.68)
What to Do About the Junkification of the Internet
Earlier this year, sexually explicit images of Taylor Swift were shared repeatedly on X. The pictures were almost certainly created with generative-AI tools, demonstrating the ease with which the technology can be put to nefarious ends. This case mirrors many other apparently similar examples, including fake images depicting the arrest of former President Donald Trump, AI-generated images of Black voters who support Trump, and fabricated images of Dr. Anthony Fauci. There is a tendency for media coverage to focus on the source of this imagery, because generative AI is a novel technology that many people are still trying to wrap their heads around. But that focus obscures the reason the images are relevant: They spread on social-media networks.
- Law (1.00)
- Government > Voting & Elections (0.70)
- Government > Regional Government > North America Government > United States Government (0.50)
Kate Middleton and the End of Shared Reality
If you're looking for an image that perfectly showcases the confusion and chaos of a choose-your-own-reality information dystopia, you probably couldn't do better than yesterday's portrait of Catherine, Princess of Wales. In just one day, the photograph has transformed from a hastily released piece of public-relations damage control into something of a Rorschach test--a collision between plausibility and conspiracy. For the uninitiated: Yesterday, in celebration of Mother's Day in the U.K., the Royal Family released a portrait on Instagram of Kate Middleton with her three children. But this was no ordinary photo. Middleton has been away from the public eye since December reportedly because of unspecified health issues, leading to a ceaseless parade of conspiracy theories. Royal watchers and news organizations naturally pored over the image, and they found a number of alarming peculiarities.
- Europe > United Kingdom > Wales (0.25)
- Europe > Russia (0.15)
- Asia > Russia (0.15)
- (2 more...)
Outsmarting Deepfake Video
In March 2022, a synthesized video of Ukrainian President Volodymyr Zelenskyy appeared on various social media platforms and a national news website. In the video, Zelenskyy urges his people to surrender in their fight against Russia; however, the speaker is not Zelenskyy at all. The minute-long clip was a deepfake, a synthesized video produced via deep learning models, and the president soon posted a legitimate message reaffirming his nation's commitment to defending its land and people. The Ukrainian government had already been warning the public that state-sponsored deepfakes could be used as part of Russia's information warfare. The video itself was not particularly realistic or convincing, but the quality of deepfakes has been improving rapidly.
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government > Ukraine Government (0.70)